379 research outputs found

    On the context-dependent nature of the contribution of the ventral premotor cortex to speech perception

    Get PDF
    What is the nature of the interface between speech perception and production, where auditory and motor representations converge? One set of explanations suggests that during perception, the motor circuits involved in producing a perceived action are in some way enacting the action without actually causing movement (covert simulation) or sending along the motor information to be used to predict its sensory consequences (i.e., efference copy). Other accounts either reject entirely the involvement of motor representations in perception, or explain their role as being more supportive than integral, not employing the identical circuits used in production. Using fMRI, we investigated whether there are brain regions that are conjointly active for both speech perception and production, and whether these regions are sensitive to articulatory (syllabic) complexity during both processes, as predicted by a covert simulation account. A group of healthy young adults (1) observed a female speaker produce a set of familiar words (perception), and (2) observed and then repeated the words (production). There were two types of words, varying in articulatory complexity as measured by the presence or absence of consonant clusters. The simple words contained no consonant cluster (e.g., "palace"), while the complex words contained one to three consonant clusters (e.g., "planet"). Results indicate that the left ventral premotor cortex (PMv) was significantly active during speech perception and speech production, but that activation in this region was scaled to articulatory complexity only during speech production, revealing an incompletely specified efferent motor signal during speech perception. The right planum temporale (PT) was also active during speech perception and speech production, and activation in this region was scaled to articulatory complexity during both production and perception. These findings are discussed in the context of current theories of speech perception, with particular attention to accounts that include an explanatory role for mirror neurons.

    Motor Response Selection in Overt Sentence Production: A Functional MRI Study

    Many different cortical areas are thought to be involved in the process of selecting motor responses, from the inferior frontal gyrus to the lateral and medial parts of the premotor cortex. The objective of the present study was to examine the neural underpinnings of motor response selection in a set of overt language production tasks. To this aim, we compared a sentence repetition task (externally constrained selection) with a sentence generation task (volitional selection) in a group of healthy adults. In general, the results clarify the contribution of the pre-SMA, cingulate areas, PMv, and pars triangularis to the process of selecting motor responses in the context of sentence production, and shed light on the manner in which this network is modulated by selection mode. Further, the present study suggests that response selection in sentence production engages neural resources similar to those engaged in the production of isolated words and oral motor gestures.

    Gesture's Neural Language

    When people talk to each other, they often make arm and hand movements that accompany what they say. These manual movements, called "co-speech gestures," can convey meaning by way of their interaction with the oral message. Another class of manual gestures, called "emblematic gestures" or "emblems," also conveys meaning, but in contrast to co-speech gestures, they can do so directly and independently of speech. There is currently significant interest in the behavioral and biological relationships between action and language. Since co-speech gestures are actions that rely on spoken language, and emblems convey meaning to the extent that they can sometimes substitute for speech, these actions may be important, and potentially informative, examples of language–motor interactions. Researchers have recently been examining how the brain processes these actions. The current results of this work do not yet give a clear understanding of gesture processing at the neural level. For the most part, however, it seems that two complementary sets of brain areas respond when people see gestures, reflecting their role in disambiguating meaning. These include areas thought to be important for understanding actions and areas ordinarily related to processing language. The shared and distinct responses across these two sets of areas during communication are just beginning to emerge. In this review, we discuss the ways that the brain responds when people see gestures, how these responses relate to brain activity when people process language, and how these might relate in normal, everyday communication.

    Structural correlates of spoken language abilities: a surface-based region-of-interest morphometry study

    Brain structure can predict many aspects of human behavior, though the extent of this relationship in healthy adults, particularly for language-related skills, remains largely unknown. The objective of the present study was to explore this relation using magnetic resonance imaging (MRI) on a group of 21 healthy young adults who completed two language tasks: (1) semantic fluency and (2) sentence generation. For each region of interest, cortical thickness, surface area, and volume were calculated. The results show that verbal fluency scores correlated mainly with measures of brain morphology in the left inferior frontal cortex and bilateral insula. Sentence generation scores correlated with structure of the left inferior parietal and right inferior frontal regions. These results reveal that the anatomy of several structures in the frontal and parietal lobes is associated with spoken language performance. The presence of both negative and positive correlations highlights the complex relation between brain and language.

    Models of Speech Processing

    One of the fundamental questions about language is how listeners map the acoustic signal onto syllables, words, and sentences, resulting in understanding of speech. For normal listeners, this mapping is so effortless that one rarely stops to consider just how it takes place. However, studies of speech have shown that this acoustic signal contains a great deal of underlying complexity. A number of competing models seek to explain how these intricate processes work. Such models have often narrowed the problem to mapping the speech signal onto isolated words, setting aside the complexity of segmenting continuous speech. Continuous speech has presented a significant challenge for many models because of the high variability of the signal and the difficulties involved in resolving the signal into individual words. The importance of understanding speech becomes particularly apparent when neurological disease affects this seemingly basic ability. Lesion studies have explored impairments of speech sound processing to determine whether deficits occur in perceptual analysis of acoustic-phonetic information or in stored abstract phonological representations (e.g., Basso, Casati, & Vignolo, 1977; Blumstein, Cooper, Zurif, & Caramazza, 1977). Furthermore, researchers have attempted to determine in what ways underlying phonological/phonetic impairments may contribute to auditory comprehension deficits (Blumstein, Baker, & Goodglass, 1977). In this chapter, we discuss several psycholinguistic models of word recognition (the process of mapping the speech signal onto the lexicon), and outline how components of such models might correspond to the functional anatomy of the brain. We also relate evidence from brain lesion and brain activation studies to components of such models. We then present some approaches that deal with speech perception more generally, and touch on a few current topics of debate. Supported by the National Institutes of Health under grant NIH DC R01-3378 to the senior author (SLS).
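The incremental mapping described above can be made concrete with a toy sketch of a cohort-style recognition process, in which each incoming phoneme prunes the set of lexical candidates consistent with the input so far. This is an invented illustration, not a model from the chapter: the miniature lexicon is made up, and written letters stand in for phonemes.

```python
# Toy cohort-style word recognition: each incoming "phoneme" (here, a letter)
# narrows the candidate set to words consistent with the input so far.
# The lexicon below is invented for demonstration purposes.

LEXICON = ["cat", "cap", "captain", "candle", "dog"]

def recognize(phonemes):
    """Return the candidate cohort after each successive input segment."""
    cohorts = []
    for i in range(1, len(phonemes) + 1):
        prefix = phonemes[:i]
        cohort = [w for w in LEXICON if w.startswith(prefix)]
        cohorts.append((prefix, cohort))
    return cohorts

for prefix, cohort in recognize("cap"):
    print(prefix, cohort)
```

Real models differ in how candidates are activated and compete (and in how they handle continuous, variable input), but the winnowing dynamic sketched here is the shared core idea.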

    A Network Model of Observation and Imitation of Speech

    Much evidence has now accumulated demonstrating and quantifying the extent of shared regional brain activation for observation and execution of speech. However, the nature of the actual networks that implement these functions, i.e., both the brain regions and the connections among them, and the similarities and differences across these networks, have not been elucidated. The current study aims to formally characterize networks for observation and imitation of syllables in the healthy adult brain and to compare their structure and effective connectivity. Eleven healthy participants observed or imitated audiovisual syllables spoken by a human actor. We constructed four structural equation models to characterize the networks for observation and imitation in each of the two hemispheres. Our results show that the network models for observation and imitation comprise the same essential structure but differ in important ways from each other (in both hemispheres) based on connectivity. In particular, our results show that the connections from posterior superior temporal gyrus and sulcus to ventral premotor, ventral premotor to dorsal premotor, and dorsal premotor to primary motor cortex in the left hemisphere are stronger during imitation than during observation. The first two connections are implicated in a putative dorsal stream of speech perception, thought to involve translating auditory speech signals into motor representations. Thus, the current results suggest that flow of information during imitation, starting at the posterior superior temporal cortex and ending in the motor cortex, enhances input to the motor cortex in the service of speech execution.
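For a purely feed-forward chain like the pSTG → PMv → PMd → M1 pathway described above, the path coefficients of a recursive structural equation model reduce to regression weights of each region on its parent. The sketch below illustrates only that estimation idea on simulated time series; the connection strengths, noise levels, and data are invented, and the study's actual SEMs were fit with dedicated software rather than this simplification.

```python
import numpy as np

# Minimal path-analysis sketch (not the study's model or data): for a
# recursive chain pSTG -> PMv -> PMd -> M1, each path coefficient can be
# estimated by regressing a region's (standardized) time series on its
# parent's. Two conditions are simulated with different true strengths,
# mimicking the reported imitation > observation difference.

rng = np.random.default_rng(0)

def simulate(strength, n=500):
    pstg = rng.standard_normal(n)
    pmv = strength * pstg + 0.5 * rng.standard_normal(n)
    pmd = strength * pmv + 0.5 * rng.standard_normal(n)
    m1 = strength * pmd + 0.5 * rng.standard_normal(n)
    return pstg, pmv, pmd, m1

def path_coefficient(parent, child):
    # OLS slope of child on parent; standardizing both gives the path weight.
    parent = (parent - parent.mean()) / parent.std()
    child = (child - child.mean()) / child.std()
    return float(parent @ child / (parent @ parent))

obs = simulate(strength=0.4)   # hypothetical "observation" condition
imi = simulate(strength=0.9)   # hypothetical "imitation" condition

for name, cond in [("observation", obs), ("imitation", imi)]:
    coefs = [path_coefficient(cond[i], cond[i + 1]) for i in range(3)]
    print(name, [round(c, 2) for c in coefs])
```

Full SEM additionally fits the whole covariance structure at once and compares nested models by fit statistics; the per-edge regressions here are just the simplest way to see what a "stronger connection during imitation" means numerically.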

    Parallel Workflows for Data-Driven Structural Equation Modeling in Functional Neuroimaging

    We present a computational framework suitable for a data-driven approach to structural equation modeling (SEM) and describe several workflows for modeling functional magnetic resonance imaging (fMRI) data within this framework. The Computational Neuroscience Applications Research Infrastructure (CNARI) employs a high-level scripting language called Swift, which is capable of spawning hundreds of thousands of simultaneous R processes (R Development Core Team, 2008), consisting of self-contained SEMs, on a high-performance computing (HPC) system. These self-contained R processing jobs are data objects generated by OpenMx, a plug-in for R, which can generate a single model object containing the matrices and algebraic information necessary to estimate parameters of the model. With such an infrastructure in place, a structural modeler may begin to investigate exhaustive searches of the model space. Specific applications of the infrastructure, statistics related to model fit, and limitations are discussed in relation to exhaustive SEM. In particular, we discuss how workflow management techniques can help to solve large computational problems in neuroimaging.
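The core pattern here is embarrassingly parallel: enumerate every candidate model, fit each one independently, and rank the fits. A minimal sketch of that idea follows, with Python's standard-library process pool standing in for the Swift/R/OpenMx stack the paper actually uses, ordinary least-squares models standing in for SEMs, and BIC as the fit statistic. All data and model choices below are invented for illustration.

```python
from itertools import combinations
from multiprocessing import Pool

import numpy as np

# Sketch of exhaustive, data-driven model search (NOT the CNARI/Swift/OpenMx
# implementation): every subset of candidate predictors is fit as a small
# linear model in a worker process and scored by BIC. A real SEM fit would
# replace fit_model; the parallel fan-out/rank pattern is the same.

rng = np.random.default_rng(1)
n = 200
X = rng.standard_normal((n, 4))                     # four candidate sources
y = 0.8 * X[:, 0] + 0.5 * X[:, 2] + 0.3 * rng.standard_normal(n)

def fit_model(cols):
    """Fit y ~ X[:, cols] by least squares and return (BIC, columns used)."""
    cols = list(cols)
    beta, res, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    rss = float(res[0]) if res.size else float(((y - X[:, cols] @ beta) ** 2).sum())
    bic = n * np.log(rss / n) + len(cols) * np.log(n)
    return bic, cols

if __name__ == "__main__":
    candidates = [c for k in range(1, 5) for c in combinations(range(4), k)]
    with Pool() as pool:                            # one fit per worker task
        scored = pool.map(fit_model, candidates)
    best_bic, best_cols = min(scored)
    print("best model uses regions", best_cols)
```

With four candidate connections the search space is only 15 models; the paper's point is that workflow systems like Swift make the same pattern tractable when the space runs to hundreds of thousands of SEMs.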

    Intensive Language Therapy for Nonfluent Aphasia With And Without Surgical Implantation of an Investigational Cortical Stimulation Device: Preliminary Language and Imaging Results

    This randomized clinical trial evaluated the feasibility of targeted epidural cortical stimulation delivered concurrently with speech-language therapy (SLT) in four subjects with chronic Broca's aphasia. Four matched controls received identical SLT without stimulation. Investigational subjects showed a mean WAB-AQ change of 8.0 points immediately post-therapy and at 6-week follow-up, and 12.3 points at 12-week follow-up. The control group's mean WAB-AQ change was 4.6, 5.5, and 3.6 points, respectively. Similar patterns of change were noted on the Communicative Effectiveness Index. fMRI changes suggested differential reorganization. Cortical stimulation in combination with intensive SLT may enhance language rehabilitation for chronic Broca's aphasia.

    Rule-based treatment for acquired phonological dyslexia

    In the context of a multiple-baseline design, this study demonstrated the positive effects of behavioural treatment using grapheme-to-phoneme correspondence rules to treat a patient with phonological dyslexia 17 years after stroke onset. Treatment used repeated exposure to real and nonsense word stimuli embodying the regularities of two grapheme-to-phoneme correspondence rules (GPCR), with hierarchical cueing and knowledge of results. Results revealed a pattern of performance that increased beyond baseline variability and coincided in time with the institution of treatment. Generalization of these treatment effects occurred to words requiring knowledge of other GPCR and to an independent processing-based reading measure.
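The regularity that GPCR treatment trains can be pictured as a small rule table applied left to right across a written word, with longer graphemes matched before their single-letter parts. The rules and phoneme symbols below are invented examples for illustration, not the study's actual stimuli or rule set.

```python
# Toy grapheme-to-phoneme correspondence (GPC) rule applier. The rule table
# and phoneme labels are invented; real English GPC rules are far larger and
# context-sensitive. Longer graphemes are matched first, so "sh" wins over
# "s" + "h".

GPC_RULES = {
    "sh": "S",   # as in "ship"
    "ch": "tS",  # as in "chip"
    "a": "ae",
    "i": "I",
    "p": "p",
    "s": "s",
    "h": "h",
    "c": "k",
}

def to_phonemes(word):
    """Greedy left-to-right decoding with longest grapheme match."""
    phonemes, i = [], 0
    graphemes = sorted(GPC_RULES, key=len, reverse=True)
    while i < len(word):
        for g in graphemes:
            if word.startswith(g, i):
                phonemes.append(GPC_RULES[g])
                i += len(g)
                break
        else:
            raise ValueError(f"no rule covers {word[i]!r}")
    return phonemes

print(to_phonemes("ship"))
print(to_phonemes("chip"))
```

Because the mapping is rule-based rather than word-specific, the same table decodes nonsense words ("shap", "chis") as readily as real ones, which is why training the rules can generalize to untrained items as reported above.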